Results 1 - 11 of 11
1.
37th International Conference on Image and Vision Computing New Zealand, IVCNZ 2022 ; 13836 LNCS:119-130, 2023.
Article in English | Scopus | ID: covidwho-2249304

ABSTRACT

Annotating medical images for disease detection is often tedious and expensive. Moreover, the available training samples for a given task are generally scarce and imbalanced. These conditions are not conducive to learning effective deep neural models. Hence, it is common to 'transfer' neural networks trained on natural images to the medical image domain. However, this paradigm underperforms due to the large domain gap between natural and medical image data. To address that, we propose a novel concept of Pre-text Representation Transfer (PRT). In contrast to conventional transfer learning, which fine-tunes a source model after replacing its classification layers, PRT retains the original classification layers and updates the representation layers through an unsupervised pre-text task. The task is performed with original (not synthetic) medical images, without utilizing any annotations. This enables representation transfer with a large amount of training data. This high-fidelity representation transfer allows us to use the resulting model as a more effective feature extractor. Moreover, we can also subsequently perform traditional transfer learning with this model. We devise a collaborative-representation-based classification layer for the case when we leverage the model as a feature extractor. We fuse the output of this layer with the predictions of a model induced with traditional transfer learning performed over our pre-text transferred model. The utility of our technique for limited and imbalanced data classification problems is demonstrated with an extensive five-fold evaluation of three large-scale models, tested for five different class-imbalance ratios for CT-based COVID-19 detection. Our results show a consistent gain over conventional transfer learning with the proposed method. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
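
A minimal PyTorch sketch of the PRT idea follows: the source model's classification layers are kept intact while only the representation layers are updated through an unsupervised pre-text task. The rotation-prediction task, backbone choice, and hyper-parameters are illustrative assumptions; the abstract does not specify them.

```python
# Hedged sketch of Pre-text Representation Transfer (PRT): keep the
# original classifier, update only the representation layers via an
# unsupervised pretext task (rotation prediction is assumed here).
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone = nn.Sequential(*list(model.children())[:-1])  # representation layers
pretext_head = nn.Linear(512, 4)  # predicts rotation: 0/90/180/270 degrees

params = list(backbone.parameters()) + list(pretext_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)

def rotate_batch(x):
    """Make four rotated copies of each image plus rotation labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return rotated, labels

images = torch.randn(8, 3, 224, 224)  # stand-in for unlabeled medical images
rotated, labels = rotate_batch(images)
optimizer.zero_grad()
features = backbone(rotated).flatten(1)
loss = nn.functional.cross_entropy(pretext_head(features), labels)
loss.backward()
optimizer.step()
# model.fc, the original classification layer, was never replaced or touched.
```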

2.
2022 International Conference on Emerging Trends in Computing and Engineering Applications, ETCEA 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2229710

ABSTRACT

Chest imaging has recently played an important role in COVID-19 diagnosis compared to laboratory diagnosis such as RT-PCR. This paper investigates a robust algorithm to detect COVID-19-infected patients using computed tomography scans. The proposed algorithm takes a deep learning approach, applying an off-the-shelf pretrained neural network to extract a feature map from the first convolutional layer, which preserves basic features related to geometric structure. The extracted features are then classified with a machine learning classifier, a support vector machine, to label the tested images into two classes: infected versus not infected. The algorithm was trained and tested using two open-source datasets. Experiments were conducted with five popular off-the-shelf pretrained CNN architectures; the performance was measured and evaluated for the extracted image features, reaching a classification accuracy of 90%. © 2022 IEEE.
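
A hedged sketch of this kind of pipeline is shown below; the specific CNN (VGG16), the pooling step, and the data are stand-ins, since the abstract does not name the exact architectures:

```python
# Sketch: first-convolutional-layer features from an off-the-shelf CNN,
# classified with an SVM. Layer and model choices are assumptions.
import torch
import torchvision.models as models
from sklearn.svm import SVC

first_block = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:2]
first_block.eval()  # first conv layer + ReLU; keeps low-level geometry

def extract(x):
    with torch.no_grad():
        fmap = first_block(x)                 # (N, 64, H, W) feature maps
        return fmap.mean(dim=(2, 3)).numpy()  # pool to (N, 64) vectors

X_train = extract(torch.randn(20, 3, 224, 224))  # stand-in CT scans
y_train = [0] * 10 + [1] * 10                    # not infected vs. infected
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict(extract(torch.randn(2, 3, 224, 224))))
```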

3.
IEEE Transactions on Artificial Intelligence ; : 1-11, 2022.
Article in English | Scopus | ID: covidwho-2192073

ABSTRACT

Automatic diagnosis of COVID-19 using chest CT images is of great significance for preventing its spread. However, it is difficult to precisely identify COVID-19 due to the following problems: 1) the location and size of lesions can vary greatly in CT images; 2) its unique characteristics are often imperceptible in imaging findings. To solve these problems, a Deep Dual Attention Network is proposed.

4.
KSII Transactions on Internet and Information Systems ; 16(11):3658-3679, 2022.
Article in English | Scopus | ID: covidwho-2163765

ABSTRACT

Classification of persons wearing and not wearing face masks in images has emerged as a new computer vision problem during the COVID-19 pandemic. To address this problem and scale up research in this domain, this paper proposes a hybrid technique employing ResNet-101 as a feature extractor and a multi-layer perceptron (MLP) classifier. The proposed technique is tested and validated on a self-created face mask classification dataset and a standard dataset. On the self-created dataset, the proposed technique achieved a classification accuracy of 97.3%. To benchmark the proposed technique, six other state-of-the-art CNN feature extractors paired with six classical machine learning classifiers were tested and compared against it. The proposed technique achieved better classification accuracy and 1-6% higher precision, recall, and F1 score than the other tested deep feature extractors and machine learning classifiers. Copyright © 2022 KSII.
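
A minimal sketch of the ResNet-101-features-plus-MLP idea follows; the data, layer choice, and MLP size are illustrative assumptions rather than the paper's settings:

```python
# Sketch: frozen ResNet-101 as feature extractor, scikit-learn MLP as
# classifier for masked vs. unmasked faces. Sizes are assumptions.
import torch
import torchvision.models as models
from sklearn.neural_network import MLPClassifier

resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # expose the 2048-d penultimate features
resnet.eval()

def features(x):
    with torch.no_grad():
        return resnet(x).numpy()

X = features(torch.randn(16, 3, 224, 224))  # stand-in face crops
y = [0] * 8 + [1] * 8                       # masked vs. unmasked labels
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300).fit(X, y)
print(clf.predict(features(torch.randn(2, 3, 224, 224))))
```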

5.
3rd International Workshop of Advances in Simplifying Medical Ultrasound, ASMUS 2022, held in Conjunction with 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022 ; 13565 LNCS:3-12, 2022.
Article in English | EuropePMC | ID: covidwho-2059733

ABSTRACT

Artificial intelligence-based analysis of lung ultrasound imaging has been demonstrated as an effective technique for rapid diagnostic decision support throughout the COVID-19 pandemic. However, such techniques can require days- or weeks-long training processes and hyper-parameter tuning to develop intelligent deep learning image analysis models. This work focuses on leveraging 'off-the-shelf' pre-trained models as deep feature extractors for scoring disease severity with minimal training time. We propose using pre-trained initializations of existing methods ahead of simple and compact neural networks to reduce reliance on computational capacity. This reduction in computational capacity is of critical importance in time-limited or resource-constrained circumstances, such as the early stages of a pandemic. On a dataset of 49 patients, comprising over 20,000 images, we demonstrate that the use of existing methods as feature extractors results in effective classification of COVID-19-related pneumonia severity while requiring only minutes of training time. Our methods achieve an accuracy of over 0.93 on a 4-level severity scale and provide per-patient regional and global scores comparable to expert-annotated ground truths. These results demonstrate the capability for rapid deployment and use of such minimally adapted methods for progress monitoring, patient stratification, and management in clinical practice for COVID-19 patients, and potentially in other respiratory diseases. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
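
The core recipe, a frozen off-the-shelf backbone feeding a compact trainable head for the 4-level severity score, might look like the following sketch; the backbone and head sizes are assumptions:

```python
# Sketch: frozen pre-trained feature extractor + small trainable head
# for 4-level severity scoring. Architecture choices are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

weights = models.MobileNet_V3_Small_Weights.DEFAULT
backbone = models.mobilenet_v3_small(weights=weights)
backbone.classifier = nn.Identity()  # expose the 576-d pooled features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # the feature extractor stays fixed

head = nn.Sequential(nn.Linear(576, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, 3, 224, 224)  # stand-in ultrasound frames
y = torch.randint(0, 4, (8,))    # severity scores 0-3
optimizer.zero_grad()
with torch.no_grad():
    feats = backbone(x)          # only the small head needs training
loss = nn.functional.cross_entropy(head(feats), y)
loss.backward()
optimizer.step()
```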

6.
2022 International Conference on Innovations in Science, Engineering and Technology, ICISET 2022 ; : 350-355, 2022.
Article in English | Scopus | ID: covidwho-1901443

ABSTRACT

Twitter is deemed the most reliable and convenient microblogging platform for getting real-time news and information. During the COVID-19 pandemic, people were keen to share various information on Twitter, ranging from new cases and healthcare guidelines to medication and vaccine news. However, a major portion of the shared tweets is uninformative and misleading, which may create mass panic. Hence, it is an important task to distinguish and label a COVID-19 tweet as informative or uninformative. Prior works mostly focused on various pretrained transformer models and different types of contextual feature extractors to address this task. However, most applied these models one at a time and did not employ any effective neural layer downstream to distill the tweet contexts effectively. Since a tweet may contain a multifarious context, representing it with only one kind of feature extractor may not work well. To overcome this limitation, we present an approach that leverages an ensemble of cutting-edge transformer models to capture the diverse contextual dimensions of tweets. We exploit the BERT, CTBERT, BERTweet, RoBERTa, and XLM-RoBERTa models in our proposed method. Next, we perform a pooling operation on the extracted embedding features to transform them into document embedding vectors. Then, we utilize a feed-forward neural architecture with a linear activation function for the classification task. To generate the final prediction, we utilize a majority-voting-driven ensemble technique. Experiments on the WNUT-2020 COVID-19 English Tweet dataset demonstrate the efficacy of our method over other state-of-the-art methods. © 2022 IEEE.
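
To make the pipeline concrete, here is a hedged sketch of the embed-pool-classify-vote flow using Hugging Face Transformers; only two of the five models are shown, and the untrained stand-in heads exist purely to illustrate the voting mechanics:

```python
# Sketch: per-model tweet embeddings (mean-pooled), per-model heads,
# then majority voting. The heads here are random stand-ins.
import torch
from transformers import AutoModel, AutoTokenizer

names = ["bert-base-uncased", "roberta-base"]  # two stand-ins for the five

def embed(name, texts):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (N, T, D) token features
    return hidden.mean(dim=1)  # pool token embeddings to document vectors

texts = ["New COVID-19 cases reported in the city", "lol this is wild"]
votes = []
for name in names:
    emb = embed(name, texts)
    head = torch.nn.Linear(emb.size(1), 2)  # informative vs. uninformative
    votes.append(head(emb).argmax(dim=1))
majority = torch.mode(torch.stack(votes), dim=0).values
print(majority)  # final ensemble label per tweet
```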

7.
19th IEEE International Conference on Dependable, Autonomic and Secure Computing, 19th IEEE International Conference on Pervasive Intelligence and Computing, 7th IEEE International Conference on Cloud and Big Data Computing and 2021 International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2021 ; : 256-263, 2021.
Article in English | Scopus | ID: covidwho-1788643

ABSTRACT

COVID-19 has severe effects on several body organs, especially the lungs. These effects produce features in COVID-19 patients' Computed Tomography (CT) images that are distinct from other viral pneumonias. Although COVID-19 is not primarily screened by CT, machine learning-based diagnosis systems can detect COVID-19 lung abnormalities early. Feature extraction is crucial for the success of traditional machine learning algorithms, which rely on hand-crafted features to identify and classify patterns in an image. This paper utilizes Gabor filters as the primary feature extractor for automated COVID-19 classification from lung CT images. We use a publicly available COVID-19 dataset of chest CT images to validate the performance and accuracy of the proposed model. The Gabor filter and other feature extractors with Random Forest classifiers achieved over 81% classification accuracy, a sensitivity of 81%, a specificity of 82%, and an F1 score of 81%. © 2021 IEEE.
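
A compact sketch of a Gabor-filter-bank feature extractor feeding a Random Forest is given below; the filter frequencies, orientations, and response statistics are assumptions, not the paper's exact settings:

```python
# Sketch: hand-crafted Gabor features + Random Forest for CT slices.
# Filter-bank parameters are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(img):
    feats = []
    for frequency in (0.1, 0.2, 0.3):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(img, frequency=frequency, theta=theta)
            feats += [real.mean(), real.var()]  # simple response statistics
    return feats

X = [gabor_features(np.random.rand(64, 64)) for _ in range(20)]  # stand-in CTs
y = [0] * 10 + [1] * 10  # non-COVID vs. COVID-19 labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([gabor_features(np.random.rand(64, 64))]))
```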

8.
2022 IEEE International Conference on Consumer Electronics, ICCE 2022 ; 2022-January, 2022.
Article in English | Scopus | ID: covidwho-1779086

ABSTRACT

Wearing a facial mask has become a must in daily life due to the global COVID-19 pandemic. However, the performance of conventional face recognition systems degrades severely for faces occluded by masks, so combating the effect of occlusion on face recognition is an important issue. At the same time, the performance of existing methods developed for masked face recognition degrades noticeably when dealing with unmasked faces. To address this issue for real-world applications, where the gallery image or the probe image may be a masked or unmasked face, we propose the concept of balanced facial feature matching and, based on it, design a robust masked face recognition system. The matching is balanced because it is performed on features extracted from corresponding facial regions. The system consists of a classification network and two feature extractors. The classification network classifies an input face image as masked or unmasked. One feature extractor extracts the features of the full face, and the other uses a guided perceptual loss to focus feature extraction on the non-occluded part of the face. The system is tested on both synthetic and real data. Face verification accuracy is improved by 2.4% on the synthetically masked LFW dataset, 1.9% on the MFR2 dataset, and 5.4% on the RMFD dataset. The results further show that the system improves masked face recognition while preserving the performance of unmasked face recognition. © 2022 IEEE.
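
Conceptually, balanced matching routes both images through an extractor that matches their occlusion state, so features always come from corresponding facial regions. The stand-in networks below only illustrate that routing; the paper's actual architectures and guided perceptual loss are not reproduced:

```python
# Conceptual sketch of balanced facial feature matching. All networks
# are random stand-ins used purely to show the routing logic.
import torch
import torch.nn as nn

mask_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 2))
full_face_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
upper_face_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))

def verify(img_a, img_b, threshold=0.5):
    # if either face is masked, match on non-occluded (upper-face) features
    # so both feature vectors describe the same facial region
    masked = any(mask_classifier(img).argmax(1).item() == 1
                 for img in (img_a, img_b))
    net = upper_face_net if masked else full_face_net
    sim = nn.functional.cosine_similarity(net(img_a), net(img_b)).item()
    return sim > threshold

a, b = torch.randn(1, 3, 112, 112), torch.randn(1, 3, 112, 112)
print(verify(a, b))
```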

9.
4th International Seminar on Research of Information Technology and Intelligent Systems, ISRITI 2021 ; : 184-188, 2021.
Article in English | Scopus | ID: covidwho-1769654

ABSTRACT

Vaccination is one of the efforts to overcome the COVID-19 pandemic in many countries, including Indonesia. Diverse responses regarding the COVID-19 vaccine have come from all levels of Indonesian society through social media. Sentiment analysis about vaccines on social media is one way to investigate public responses to these efforts. This study proposes a model that analyzes these sentiments by classifying public responses on Twitter into positive, negative, and neutral classes. One of the success factors in sentiment analysis is the selection of an appropriate feature extraction method. In general, tweets contain many non-standard words. fastText is a feature extractor that can handle non-standard words, representing them with vectors close to those of standard words. The proposed tweet sentiment analysis model therefore consists of fastText as a feature extractor and an SVM as a classifier. This study utilizes 832 Indonesian tweets for experiments. For comparison, another feature extractor, Word2Vec, and another classifier, MNB, are also used. The experimental results show that the fastText-SVM model outperformed the others in terms of accuracy, i.e., 88.10%. © 2021 IEEE.
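
A toy sketch of the fastText-plus-SVM pipeline with gensim and scikit-learn follows; the three tweets and labels are placeholders for the 832-tweet dataset:

```python
# Sketch: fastText embeddings (subword n-grams handle non-standard
# words) averaged per tweet, classified with an SVM. Data are toys.
import numpy as np
from gensim.models import FastText
from sklearn.svm import SVC

tweets = [["vaksin", "aman", "dan", "efektif"],
          ["takut", "efek", "samping", "vaksin"],
          ["vaksinasi", "dimulai", "hari", "ini"]]
labels = [1, 0, 2]  # positive, negative, neutral

ft = FastText(sentences=tweets, vector_size=50, min_count=1, epochs=10)

def tweet_vector(tokens):
    # average the word vectors into one tweet-level vector
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = [tweet_vector(t) for t in tweets]
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict([tweet_vector(["vaksin", "efektiff"])]))  # OOV handled
```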

10.
2020 IEEE MIT Undergraduate Research Technology Conference, URTC 2020 ; 2020.
Article in English | Scopus | ID: covidwho-1722963

ABSTRACT

Deep convolutional neural network (CNN)-assisted image classification has been one of the most discussed topics in recent years, and continuous innovation in network architectures is making it more accurate and efficient. However, training a neural network from scratch is very time-consuming and requires sophisticated computational equipment and power. Using a pre-trained neural network as a feature extractor for an image classification task, i.e., 'transfer learning', is therefore a popular approach that saves time and computational power in practical CNN use. This paper proposes an efficient way of building a full model from any pre-trained model, with high accuracy and low memory use, based on knowledge distillation. The distilled knowledge from the last layer of the pretrained network is passed through fully-connected blocks with different numbers of hidden layers, each followed by a softmax layer. The accuracies of the student networks are slightly lower than those of the full models, but they reliably indicate the accuracy of the full network. In this way, the number of hidden layers in the dense block that gives the best accuracy without over-fitting can be found in less time. Here, VGG16 and VGG19 (pre-trained on the ImageNet dataset) are tested on chest X-rays (pneumonia and COVID-19). In finding the best number of hidden layers, the approach saves nearly 44 minutes for the VGG19-based and 36 minutes 37 seconds for the VGG16-based feature extractor CNN models. © 2020 IEEE.
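
A hedged sketch of the time-saving idea, computing the frozen VGG16 features once and then cheaply comparing student heads with different numbers of hidden layers, is given below; the distillation specifics are simplified to plain feature reuse, and the data are stand-ins:

```python
# Sketch: reuse frozen VGG16 features once, then rapidly compare
# student heads of different depths. Sizes and data are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = nn.Identity()  # keep the 25088-d convolutional features
vgg.eval()

with torch.no_grad():
    feats = vgg(torch.randn(32, 3, 224, 224))  # stand-in chest X-rays
labels = torch.randint(0, 3, (32,))            # pneumonia/COVID-19/normal

def student(n_hidden):
    layers, width = [], feats.size(1)
    for _ in range(n_hidden):
        layers += [nn.Linear(width, 256), nn.ReLU()]
        width = 256
    layers.append(nn.Linear(width, 3))  # softmax is applied inside the loss
    return nn.Sequential(*layers)

for n_hidden in (1, 2, 3):  # compare candidate depths cheaply
    net = student(n_hidden)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(net(feats), labels)
        loss.backward()
        opt.step()
    acc = (net(feats).argmax(1) == labels).float().mean().item()
    print(n_hidden, "hidden layers -> train accuracy", acc)
```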

11.
1st International Conference on Data Science, Machine Learning and Artificial Intelligence, DSMLAI 2021 ; : 20-25, 2021.
Article in English | Scopus | ID: covidwho-1673512

ABSTRACT

The prime objective of this research is to develop an automatic tool, 'Lung-Infection Visualizer', for marking the region of infection in chest radiographs and cropping the marked region. The tool is also integrated with a feature extractor, a feature visualization algorithm, and a deep learning-based classifier, so radiology experts can easily mark the infected region and visualize it. In this manuscript, the authors employ template-based and brute-force approaches to feature mapping. Further, they apply ResNet, Faster Recurrent Neural Network, XceptionNet, and VGG-16 deep learning-based classifiers to classify chest radiographs into bacterial pneumonia, viral pneumonia, COVID-19, and normal classes. The authors also fine-tune the model parameters and hyperparameters to optimize the performance of the deep learning-based models. The performance comparison shows that the VGG-16 model reports the highest accuracy of 90.07% and outperforms the other models on the dataset of 5,499 chest radiographs used for this research. The cropping tool is registered as intellectual property in the authors' names under registration number SW-14092/2021 with the title 'AutoCrop Tool'. © 2021 ACM.
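
An illustrative OpenCV sketch of template-based marking and cropping of a region of infection follows; the file names are placeholders, not the tool's actual assets:

```python
# Sketch: template matching to locate a region of infection, then
# marking and cropping it. File names are hypothetical placeholders.
import cv2

radiograph = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("infection_template.png", cv2.IMREAD_GRAYSCALE)

# slide the template over the radiograph and find the best match
result = cv2.matchTemplate(radiograph, template, cv2.TM_CCOEFF_NORMED)
_, _, _, top_left = cv2.minMaxLoc(result)  # maxLoc for TM_CCOEFF_NORMED
h, w = template.shape

# mark the matched region and crop it out
x, y = top_left
marked = cv2.rectangle(radiograph.copy(), (x, y), (x + w, y + h), 255, 2)
crop = radiograph[y:y + h, x:x + w]
cv2.imwrite("marked.png", marked)
cv2.imwrite("cropped_roi.png", crop)
```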
